Artificial Intelligence

Myths, meaning, and the middle ground

Posted by Julien on June 19, 2025

Artificial Intelligence (AI) is no longer just a buzzword for startups or sci-fi films. It’s embedded in our tools, our policies, our fears, and our hopes. From political speeches in Europe to podcasts from Silicon Valley, AI has become a defining topic of our era — both celebrated and feared.

But what is AI really? Is it coming for your job, your creativity, your data… your soul? Is it going to save the world, or drain the power grid while trying?

Let’s take a step back. Let’s unpack what AI actually is, bust some common myths, explore its risks — and look at how voices like Ben Affleck, Sam Altman, Trevor Noah, Emmanuel Macron, Simon Sinek, Brad Smith, and Jensen Huang are helping us reframe the conversation.

What is AI — Really?

Artificial Intelligence is not consciousness, and it’s not some all-knowing mind. At its core, AI is a set of algorithms trained on data to find patterns, generate outputs, and perform tasks — often at speed and scale that humans can’t match.

There are three main levels of AI:

  • Narrow AI – like ChatGPT, image classifiers, or recommendation systems. These are good at one thing, but only that.
  • General AI (AGI) – hypothetical systems that could reason and learn across domains like humans.
  • Superintelligence – the imagined future AI that’s smarter than all of us combined. (We’re not there.)

In Episode 1 of the OpenAI Podcast, Sam Altman explains the goal of AGI like this:

“We think it’s going to be the most powerful technology humanity has yet invented… and it’s going to require an unprecedented level of coordination and care.”

He’s not wrong. AGI may never come — but the systems we are building today already carry immense influence. Which brings us to…

Sam Altman on AGI, GPT-5, and what’s next — the OpenAI Podcast Ep. 1

Myth #1: “AI will replace all human creativity”

Actor Ben Affleck recently commented on AI and the arts, saying:

“AI doesn’t stand a chance against actors. It can’t do Shakespeare. It doesn’t understand tragedy or humor.”

He’s making a key point: AI may simulate creativity, but it doesn’t experience life. It doesn’t feel heartbreak. It doesn’t suffer loss or revel in joy. It predicts what’s likely to come next based on patterns — but it doesn’t understand.

AI doesn’t stand a chance against actors, or Shakespeare - Ben Affleck

At the Web Summit Lisbon 2023, comedian Trevor Noah echoed this when asked about AI in comedy:

“AI might write jokes, but it doesn’t bomb on stage. It doesn’t adjust based on audience energy. Comedy is chemistry.”

The best creative work is emotional, not just structural. AI can help creators — but it can’t replace the human condition.

AI and entertainment with Trevor Noah - Web Summit Lisbon 2023

Myth #2: “AI has no purpose — it’s just hype”

Simon Sinek pushes back on this cynicism. When asked about AI, he reframes the issue:

“The question isn’t what AI can do, but what we want it to do.”

In other words: AI is a tool, not a goal. It reflects the values of those who build and use it. It can be used to optimize advertising clicks, or to detect early signs of cancer. It can reinforce bias or dismantle it.

The real danger isn’t AI itself — it’s using it with no clarity of purpose.

Simon's thoughts on the purpose of AI

Myth #3: “Governments are behind — AI is a Wild West”

Yes, regulation is lagging. But it’s not absent.

In 2024, French President Emmanuel Macron visited Portugal to meet with French-Portuguese tech leaders. His message was clear: Europe must lead in ethical, sovereign AI. France is investing in secure infrastructure, talent, and energy strategies to support sustainable AI growth.

Macron emphasized not just ethics — but energy:

“If we want digital sovereignty and climate responsibility, we must produce massive amounts of clean electricity.”

That electricity, in Macron’s vision, comes from nuclear power — a strategic move to decarbonize Europe’s cloud and AI infrastructure. It’s a clear sign that the future of AI isn’t just about models — it’s about national infrastructure.

In Portugal, President Emmanuel Macron meets with French-Portuguese tech leaders

Myth #4: “AI will destroy us”

Headlines love dystopia. But many AI leaders disagree with the doomsday narrative.

Brad Smith, President of Microsoft, offered a more grounded take when interviewed on the Trevor Noah podcast:

“The most dangerous thing isn’t AI — it’s unregulated humans using AI irresponsibly.”

He advocates for legal frameworks, auditability, and safety protocols. The tech must evolve — but so must governance, education, and business ethics.

Altman, too, emphasized alignment in his podcast:

“It’s not enough to build powerful systems. We have to make sure they align with human values, or they shouldn’t be released at all.”

The cost of intelligence: AI and energy

Let’s talk about the elephant in the data center: energy.

AI is compute-hungry. Training models like GPT-4 or GPT-5 can take millions of kilowatt-hours, and inference at global scale adds ongoing load. This raises critical questions about emissions, electricity, and sustainability.
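To get a feel for those numbers, here is a back-of-envelope estimate of what a large training run might consume. Every figure in it (GPU count, power draw, run length, grid carbon intensity) is an illustrative assumption, not a published specification for any particular model:

```python
# Back-of-envelope training-energy estimate.
# All numbers are illustrative assumptions, not published figures.
gpus = 10_000            # assumed accelerator count for a large run
watts_per_gpu = 700      # assumed average draw per GPU, in watts
hours = 90 * 24          # assumed ~90-day training run

kwh = gpus * watts_per_gpu * hours / 1000   # total kilowatt-hours
grid_intensity = 0.4     # assumed kg CO2 per kWh (varies widely by grid)
tonnes_co2 = kwh * grid_intensity / 1000

print(f"{kwh:,.0f} kWh ≈ {tonnes_co2:,.0f} tonnes CO2")
# → 15,120,000 kWh ≈ 6,048 tonnes CO2
```

Even with these rough inputs, a single run lands in the tens of millions of kilowatt-hours — which is why the carbon intensity of the grid behind the data center matters as much as the model itself.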

Sam Altman has been vocal about OpenAI’s need for energy innovation:

“The world will need a lot more power, not less. We are actively investing in long-term solutions like fusion and sustainable compute.”

But this isn’t just a Silicon Valley problem. It’s a geopolitical one.

Emmanuel Macron sees AI and energy as inseparable — and he’s betting on nuclear power to support the European AI economy. His strategy is to build a digital industrial base that’s both energy-resilient and climate-aligned.

Then there’s Jensen Huang, CEO of NVIDIA — the company whose chips power most AI systems. At Computex 2025, he introduced the concept of the AI factory:

“AI is now infrastructure… These data centers are not warehouses of data. They are factories that take in data and produce intelligence.”

He also pushed for hardware efficiency, stating:

“Accelerated computing is sustainable computing.”

NVIDIA’s GPUs, according to Huang, deliver up to 25× more performance per watt than CPUs — a critical point as energy use scales.
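What a performance-per-watt ratio means for energy bills is easy to see with a toy calculation. Taking Huang's "up to 25×" figure at face value (it is a vendor claim, and the baseline below is an arbitrary assumption):

```python
# Energy needed for a fixed amount of compute, CPU vs. accelerator,
# assuming the quoted "up to 25x performance per watt" figure holds.
work_units = 1_000_000        # arbitrary fixed workload
cpu_perf_per_watt = 1.0       # assumed baseline: work units per joule
gpu_perf_per_watt = 25.0      # quoted up-to figure, taken at face value

cpu_energy = work_units / cpu_perf_per_watt   # joules on CPU
gpu_energy = work_units / gpu_perf_per_watt   # joules on GPU

print(f"GPU uses {gpu_energy / cpu_energy:.0%} of the CPU energy")
# → GPU uses 4% of the CPU energy
```

The point is not the exact ratio — real workloads rarely hit the "up to" number — but that for a fixed job, efficiency gains translate directly into less energy drawn from the grid.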

Real risks — and the real solutions

AI isn’t inherently evil. But it isn’t harmless either. Here are the real risks we face:

  1. Bias
    Models trained on biased data can perpetuate discrimination in hiring, justice, and healthcare.
  2. Disinformation
    Deepfakes and generative text can spread false narratives at massive scale.
  3. Job Displacement
    Automation will change the labor market. Some jobs will go. New ones will emerge — but not without disruption.
  4. Surveillance
    AI is used for tracking, profiling, and mass data analysis — often without consent or transparency.
  5. Energy & Climate
    Without a sustainable energy transition, AI could become a major source of emissions and grid stress.

These risks are not caused by AI. They are caused by humans misusing or misunderstanding AI.

AI is a mirror, not a monster

AI is not here to replace us — it’s here to reflect us.

If we build it thoughtfully, it can become a tool for empowerment. If we rush or ignore its consequences, it can amplify inequality, accelerate climate collapse, and deepen mistrust in institutions.

As Trevor Noah said:

“The real problem isn’t that AI will become human. It’s that some humans will become like AI — unfeeling, unaccountable, and optimized only for profit.”

Conclusion: embrace the tool, guard the future

AI is not Shakespeare. It is not Skynet. It’s closer to a spreadsheet on steroids — powerful, scalable, but only as meaningful as the person using it.

To build a future worth living in, we must:

  • Design with purpose, as Simon Sinek says.
  • Create sustainable infrastructure, like Jensen Huang urges.
  • Invest in clean energy, like Macron champions.
  • Set clear ethical guardrails, as Brad Smith recommends.
  • Ensure global collaboration, as Sam Altman insists.
  • And most importantly: Stay human, as Ben Affleck and Trevor Noah remind us.

This is not the age of artificial intelligence.
It’s the age of augmented intelligence — if we choose to make it so.